
    Quantitative testing

    We investigate the problem of specification-based testing with dense sets of inputs and outputs, in particular with imprecision as it might occur due to errors in measurements, numerical instability or noisy channels. Using quantitative transition systems to describe implementations and specifications, we introduce implementation relations that capture a notion of correctness "up to ε", allowing deviations of the implementation from the specification of at most ε. These quantitative implementation relations are described as Hausdorff distances between certain sets of traces. They are conservative extensions of the well-known ioco relation. We develop an on-line and an off-line algorithm to generate test cases from a requirement specification, modeled as a quantitative transition system. Both algorithms are shown to be sound and complete with respect to the quantitative implementation relations introduced.
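
    The abstract does not spell out the definitions, but the Hausdorff distance it refers to is the standard one: for two sets of traces A and B and a trace metric d, it takes the larger of the two directed distances. An "up to ε" implementation relation of this kind then requires the (possibly one-sided) distance from the implementation's traces to the specification's traces to be at most ε.

        d_H(A,B) = \max\Bigl\{\, \sup_{a \in A} \inf_{b \in B} d(a,b),\;\; \sup_{b \in B} \inf_{a \in A} d(a,b) \,\Bigr\}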

    Dependability and Survivability Evaluation of a Water Distribution Process with Arcade

    Among others, drinking water belongs to the so-called critical infrastructures. To ensure that the water production meets current and future societal needs, a systematic and rigorous analysis is needed. In this paper, we report our first experience with dependability analysis of the last phase of a water treatment facility, namely the water distribution. We use the architectural language Arcade to model this facility and use the Arcade toolset to compute three relevant dependability measures: the availability of the water distribution, the reliability, i.e., the probability that the water distribution does not fail within a given time bound, and the survivability, that is, the ability to recover from disasters. Since survivability is not directly expressible in the Arcade formalism, we show how one can modify the toolchain for the analysis of survivability.
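
    For orientation, these measures have standard textbook definitions that are not specific to this paper: for a single repairable component with exponential failure rate λ and repair rate μ, the steady-state availability and the reliability at time t are

        A = \frac{\mathit{MTTF}}{\mathit{MTTF}+\mathit{MTTR}} = \frac{\mu}{\lambda+\mu}, \qquad R(t) = e^{-\lambda t}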

    Preface

    Model-based Testing

    This paper provides a comprehensive introduction to a framework for formal testing using labelled transition systems, based on an extension and reformulation of the ioco theory introduced by Tretmans. We introduce the underlying models needed to specify the requirements, and formalise the notion of test cases. We discuss conformance, and in particular the conformance relation ioco. For this relation we prove several interesting properties, and we provide algorithms to derive test cases (either in batches, or on the fly).
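
    For reference, Tretmans' ioco relation mentioned here is usually written as follows: an implementation i conforms to a specification s iff, after every suspension trace of s (a trace over inputs, outputs and the quiescence action δ), the outputs the implementation can produce are allowed by the specification.

        i \;\mathbf{ioco}\; s \;\iff\; \forall \sigma \in \mathit{Straces}(s):\; \mathit{out}(i \;\mathbf{after}\; \sigma) \subseteq \mathit{out}(s \;\mathbf{after}\; \sigma)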

    Model checking Quantitative Linear Time Logic

    This paper considers QLtl, a quantitative analogue of Ltl, and presents algorithms for model checking QLtl over quantitative versions of Kripke structures and Markov chains.
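
    The abstract gives no semantics; one common quantitative reading of the temporal operators (which may differ in detail from the paper's QLtl) evaluates a formula on a path π to a value in [0,1], with disjunction as maximum, conjunction as minimum, and

        \llbracket \Diamond\varphi \rrbracket(\pi) = \sup_{i \ge 0}\, \llbracket \varphi \rrbracket(\pi^{(i)}), \qquad \llbracket \Box\varphi \rrbracket(\pi) = \inf_{i \ge 0}\, \llbracket \varphi \rrbracket(\pi^{(i)})

    where π^{(i)} denotes the suffix of π starting at position i.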

    A Semantic Framework for Test Coverage (Extended Version)

    Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent, but syntactically different. Moreover, those coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to test coverage. Our starting point is a weighted fault model, which assigns a weight to each potential error in an implementation. We define a framework of coverage measures that express how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of weighted fault models as fault automata. They are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
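
    In the spirit of the abstract (the notation here is ours, not necessarily the paper's): if the weighted fault model assigns a weight w(f) to every potential fault f, a test suite T can be scored by the total weight of the faults it is able to detect, either absolutely or relative to the total fault weight.

        \mathit{abscov}(T) = \sum_{f \in \mathit{detect}(T)} w(f), \qquad \mathit{relcov}(T) = \frac{\mathit{abscov}(T)}{\sum_{f} w(f)}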

    A Semantic Framework for Test Coverage

    Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent, but syntactically different. Moreover, those coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to test coverage. Our starting point is a weighted fault model, which assigns a weight to each potential error in an implementation. We define a framework of coverage measures that express how well a test suite covers such a specification, taking the error weights into account. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of weighted fault models as fault automata. They are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
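
    As a toy illustration of weight-based test selection (hypothetical data structures and a greedy heuristic; not the fault-automata algorithms of these papers):

        # Toy weight-based coverage and greedy test selection (illustrative only).

        def relative_coverage(suite, detects, weights):
            # detects: test name -> set of faults it detects; weights: fault -> weight
            covered = set().union(*(detects[t] for t in suite)) if suite else set()
            total = sum(weights.values())
            return sum(weights[f] for f in covered) / total if total else 0.0

        def greedy_suite(tests, detects, weights, target=1.0):
            # Repeatedly add the test with the largest still-uncovered fault weight.
            suite, covered = [], set()
            while relative_coverage(suite, detects, weights) < target:
                best = max(tests, key=lambda t: sum(weights[f] for f in detects[t] - covered))
                if not detects[best] - covered:
                    break  # no remaining test adds weight; target unreachable
                suite.append(best)
                covered |= detects[best]
            return suite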

    Linear and Branching System Metrics

    We extend the classical system relations of trace inclusion, trace equivalence, simulation, and bisimulation to a quantitative setting in which propositions are interpreted not as boolean values, but as elements of arbitrary metric spaces. Trace inclusion and equivalence give rise to asymmetrical and symmetrical linear distances, while simulation and bisimulation give rise to asymmetrical and symmetrical branching distances. We study the relationships among these distances, and we provide a full logical characterization of the distances in terms of quantitative versions of LTL and μ-calculus. We show that, while trace inclusion (resp. equivalence) coincides with simulation (resp. bisimulation) for deterministic boolean transition systems, linear and branching distances do not coincide for deterministic metric transition systems. Finally, we provide algorithms for computing the distances over finite systems, together with a matching lower complexity bound.
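
    As a rough illustration of the linear case (the notation is ours and the paper's definitions may differ in detail): given a distance d on the metric space in which propositions are interpreted, one can lift it to a distance between traces and then to an asymmetric linear distance between states, generalising trace inclusion.

        \mathit{td}(\sigma,\rho) = \sup_{i \ge 0} d(\sigma_i,\rho_i), \qquad \mathit{ld}(s,t) = \sup_{\sigma \in \mathit{Traces}(s)}\; \inf_{\rho \in \mathit{Traces}(t)} \mathit{td}(\sigma,\rho)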